The role of mobile cameras has increased dramatically over the past few years, leading to more and more research in automatic image quality enhancement and RAW photo processing. In this Mobile AI challenge, the target was to develop an efficient end-to-end AI-based image signal processing (ISP) pipeline that replaces the standard mobile ISP and can run on modern smartphone GPUs using TensorFlow Lite. The participants were provided with a large-scale Fujifilm UltraISP dataset consisting of thousands of paired photos captured with a normal mobile camera sensor and a professional 102MP medium-format Fujifilm GFX100 camera. The runtime of the resulting models was evaluated on the Snapdragon 8 Gen 1 GPU, which provides excellent acceleration results for the majority of common deep learning ops. The proposed solutions are compatible with all recent mobile GPUs and are able to process Full HD photos in under 20-50 milliseconds while achieving high-fidelity results. A detailed description of all models developed in this challenge is provided in this paper.
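As a rough illustration of the kind of pipeline the challenge targets, the sketch below builds a tiny learned RAW-to-RGB network in TensorFlow and exports it to TensorFlow Lite. The layer widths, the 540x960 packed-RAW input size, and the file name are illustrative assumptions, not any participant's actual submission.

```python
# Minimal sketch (not any team's model): a tiny learned ISP mapping a 4-channel
# packed RAW Bayer input to an RGB image, exported to TensorFlow Lite so it
# could be benchmarked with a mobile GPU delegate.
import tensorflow as tf

def build_tiny_isp(h=540, w=960):  # hypothetical packed-RAW resolution (half of Full HD)
    raw = tf.keras.Input(shape=(h, w, 4), name="packed_raw")
    x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(raw)
    x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(x)
    # 12 output channels -> depth_to_space(2) yields 3-channel RGB at 2x resolution (1080x1920)
    x = tf.keras.layers.Conv2D(12, 3, padding="same")(x)
    rgb = tf.keras.layers.Lambda(lambda t: tf.nn.depth_to_space(t, 2), name="rgb")(x)
    return tf.keras.Model(raw, rgb, name="tiny_isp")

model = build_tiny_isp()
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
open("tiny_isp.tflite", "wb").write(converter.convert())
```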
With the growth of high-dimensional sparse data in web-scale recommender systems, the computational cost of learning high-order feature interactions in the CTR prediction task increases substantially, which limits the use of high-order interaction models in real industrial applications. Some recent knowledge-distillation-based methods transfer knowledge from complex teacher models to shallow student models to accelerate online model inference. However, they suffer from degraded model accuracy during the knowledge distillation process, and it is challenging to balance the efficiency and effectiveness of the shallow student models. To address this problem, we propose a knowledge-distilled Directed Acyclic Graph Factorization Machine (KD-DAGFM) that learns high-order feature interactions from existing complex interaction models for CTR prediction via knowledge distillation. The proposed lightweight student model, DAGFM, can learn arbitrary explicit feature interactions from teacher networks and achieves approximately lossless performance, as proved by a dynamic programming algorithm. Besides, an improved general model, KD-DAGFM+, is shown to be effective in distilling both explicit and implicit feature interactions from any complex teacher model. Extensive experiments are conducted on four real-world datasets, including a large-scale industrial dataset from the WeChat platform with billions of feature dimensions. KD-DAGFM achieves the best performance with less than 21.5% of the FLOPs of the state-of-the-art method in both online and offline experiments, showing the superiority of DAGFM in dealing with industrial-scale data in the CTR prediction task. Our implementation code is available at: https://github.com/RUCAIBox/DAGFM.
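To make the distillation setup concrete, here is a minimal sketch in which a simplified second-order factorization machine stands in for the DAGFM student and a small MLP stands in for the teacher. The field count, embedding size, and the use of soft CTR predictions as distillation targets are illustrative assumptions, not the paper's exact training recipe.

```python
# Hedged sketch of teacher-to-student distillation for CTR prediction.
import torch
import torch.nn as nn

class FMStudent(nn.Module):
    """Second-order factorization machine as a stand-in for the DAGFM student."""
    def __init__(self, num_fields=10, embed_dim=16):
        super().__init__()
        self.embed = nn.Parameter(torch.randn(num_fields, embed_dim) * 0.01)
        self.linear = nn.Linear(num_fields, 1)

    def forward(self, x):                       # x: (batch, num_fields) dense features
        xv = x.unsqueeze(-1) * self.embed       # (batch, fields, dim)
        square_of_sum = xv.sum(1).pow(2)
        sum_of_square = xv.pow(2).sum(1)
        pairwise = 0.5 * (square_of_sum - sum_of_square).sum(-1, keepdim=True)
        return self.linear(x) + pairwise        # CTR logit

teacher = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))  # stand-in teacher
student = FMStudent()
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

x = torch.rand(32, 10)
with torch.no_grad():
    soft = torch.sigmoid(teacher(x))            # teacher's soft CTR predictions
loss = nn.functional.binary_cross_entropy_with_logits(student(x), soft)  # distillation loss
opt.zero_grad(); loss.backward(); opt.step()
```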
Alzheimer's Disease (AD), the most devastating neurodegenerative disease worldwide, accounts for nearly 10 million new cases annually. Current technology provides unprecedented opportunities to study the progression and etiology of this disease thanks to advances in imaging techniques. With the recent emergence of a society driven by big data and machine learning (ML), researchers have exerted considerable effort to summarize recent advances in ML-based AD diagnosis. Here, we outline some of the most prevalent and recent ML models for assessing the progression of AD and provide insights on the challenges, opportunities, and future directions that could be advantageous to future research in AD using ML.
As a special kind of information carrier containing both structure and feature information, graphs are widely used in graph mining, e.g., with graph neural networks (GNNs). However, in some practical scenarios, graph data are stored separately by multiple distributed parties and may not be shared directly due to conflicts of interest. Federated graph neural networks have therefore been proposed to address such data-silo problems while preserving the privacy of each party (or client). Nevertheless, different graph data distributions among the parties, known as statistical heterogeneity, may degrade the performance of naive federated learning algorithms such as FedAvg. In this paper, we propose FedEgo, a federated graph learning framework based on ego graphs, to tackle the challenges above, where each client trains its local model while also contributing to the training of a global model. FedEgo applies ego graphs to make full use of the structure information and leverages Mixup to address privacy concerns. To handle statistical heterogeneity, we integrate personalization into the learning process and propose an adaptive mixing coefficient strategy that enables each client to achieve its optimal personalization. Extensive experimental results and in-depth analysis demonstrate the effectiveness of FedEgo.
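A minimal sketch of the personalization idea: each client mixes its local parameters with the aggregated global parameters using a mixing coefficient. The ego-graph sampling, the GNN model, the Mixup step, and the adaptive rule for choosing the coefficient are omitted; the function names and the fixed lambda below are illustrative assumptions only.

```python
# Hedged sketch of size-weighted aggregation plus per-client personalization.
import numpy as np

def fedavg(client_weights, client_sizes):
    """Size-weighted average of client parameter vectors (naive FedAvg)."""
    sizes = np.asarray(client_sizes, dtype=float)
    w = np.stack(client_weights)
    return (w * (sizes / sizes.sum())[:, None]).sum(axis=0)

def personalize(local_w, global_w, lam):
    """Mix local and global parameters; lam=1 keeps a fully local model."""
    return lam * local_w + (1.0 - lam) * global_w

# toy round with 3 clients holding 5-dimensional "models"
rng = np.random.default_rng(0)
clients = [rng.normal(size=5) for _ in range(3)]
global_w = fedavg(clients, client_sizes=[100, 50, 200])
personal = [personalize(w, global_w, lam=0.3) for w in clients]  # fixed lambda for illustration
```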
Graph convolutional networks (GCNs) have been shown to be vulnerable to small adversarial perturbations, which becomes a serious threat and largely limits their application in security-critical scenarios. To mitigate this threat, considerable research effort has been devoted to increasing the robustness of GCNs against adversarial attacks. However, current defense methods are typically designed for the whole graph and consider global performance, and they face challenges in protecting important local nodes from stronger targeted adversarial attacks. In this work, we propose a simple yet effective method named Graph Universal Adversarial Defense (GUARD). Unlike previous works, GUARD protects each individual node from attacks with a universal defensive patch, which is generated once and can be applied to any node in the graph (node-agnostic). Extensive experiments on four benchmark datasets show that our method significantly improves the robustness of several established GCNs against multiple adversarial attacks and outperforms state-of-the-art defense methods by large margins. Our code is publicly available at https://github.com/edisonleeeeee/guard.
Federated learning allows multiple participants to collaboratively train efficient models without exposing private data. However, this distributed machine learning training paradigm is vulnerable to attacks from Byzantine clients, which interfere with the training of the global model by modifying the model or uploading fake gradients. In this paper, we propose a novel serverless federated learning framework, Committee Mechanism based Federated Learning (CMFL), which ensures the robustness of the algorithm with convergence guarantees. In CMFL, a committee system is set up to screen the uploaded local gradients. The committee system selects the local gradients rated by the elected members for the aggregation procedure through the selection strategy, and replaces the committee members through the election strategy. Based on different considerations of model performance and defense, two opposite selection strategies are designed for accuracy and robustness, respectively. Extensive experiments show that CMFL converges faster and achieves higher accuracy than typical federated learning, while obtaining better robustness than traditional Byzantine-tolerant algorithms in a decentralized manner. In addition, we theoretically analyze and prove the convergence of CMFL under different election and selection strategies, which is consistent with the experimental results.
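A hedged sketch of the committee screening step: elected members rate each uploaded gradient, and only the best-rated uploads are aggregated. The cosine-similarity scoring and top-k selection used here are illustrative stand-ins for the paper's actual selection strategies, and the toy gradients are synthetic.

```python
# Hedged sketch: committee members score uploaded gradients, top-k are aggregated.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def committee_filter(uploaded, committee, k):
    """Each committee gradient rates every upload; keep the k best-rated uploads."""
    scores = [np.mean([cosine(g, c) for c in committee]) for g in uploaded]
    keep = np.argsort(scores)[-k:]
    return [uploaded[i] for i in keep]

def aggregate(grads):
    return np.mean(np.stack(grads), axis=0)

rng = np.random.default_rng(1)
honest = [rng.normal(0.5, 0.1, size=8) for _ in range(6)]
byzantine = [rng.normal(-5.0, 0.1, size=8) for _ in range(2)]   # poisoned uploads
committee = [rng.normal(0.5, 0.1, size=8) for _ in range(3)]    # elected members' own gradients
update = aggregate(committee_filter(honest + byzantine, committee, k=5))
```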
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point-cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly by encoding the 3D points into multi-modal features. The core design of CMT is quite simple, yet its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
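A minimal sketch of the token-level fusion idea described above: image tokens and point-cloud tokens are concatenated into one memory sequence, and a set of learned object queries decodes box parameters and class logits from it with a standard Transformer decoder. The dimensions, the 7-value box parameterization, and the absence of positional encodings are simplifying assumptions rather than CMT's actual design.

```python
# Hedged sketch: fuse image and point-cloud tokens as one sequence for query decoding.
import torch
import torch.nn as nn

class TinyCrossModalDetector(nn.Module):
    def __init__(self, d_model=128, num_queries=50):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, d_model))
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=3)
        self.box_head = nn.Linear(d_model, 7)     # (x, y, z, w, l, h, yaw) placeholder
        self.cls_head = nn.Linear(d_model, 10)    # 10 placeholder classes

    def forward(self, img_tokens, pts_tokens):
        memory = torch.cat([img_tokens, pts_tokens], dim=1)   # both modalities in one sequence
        q = self.queries.unsqueeze(0).expand(memory.size(0), -1, -1)
        h = self.decoder(q, memory)
        return self.box_head(h), self.cls_head(h)

model = TinyCrossModalDetector()
boxes, logits = model(torch.randn(2, 300, 128), torch.randn(2, 500, 128))
```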
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
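A hedged sketch of the trigger-injection step: a small patch is stamped onto a fraction of the synthetic images between distillation updates. DOORPING would additionally re-optimize the trigger at every iteration, which is omitted here; the image shapes, the white-square trigger, and the poison ratio are arbitrary placeholders.

```python
# Hedged sketch: stamp a backdoor trigger onto part of the synthetic (distilled) set.
import numpy as np

def apply_trigger(images, trigger, ratio=0.1, seed=0):
    """Paste `trigger` into the bottom-right corner of a random subset of images."""
    rng = np.random.default_rng(seed)
    poisoned = images.copy()
    th, tw = trigger.shape[:2]
    idx = rng.choice(len(images), size=max(1, int(ratio * len(images))), replace=False)
    poisoned[idx, -th:, -tw:, :] = trigger
    return poisoned, idx

synthetic = np.random.rand(100, 32, 32, 3).astype(np.float32)   # distilled images (placeholder)
trigger = np.ones((4, 4, 3), dtype=np.float32)                   # white square trigger
synthetic, poisoned_idx = apply_trigger(synthetic, trigger)
# ... continue the distillation updates on `synthetic`, then train the victim model on it
```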
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes with only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features; second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, from two aspects: feature level and instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, performance on novel classes improves significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modifications. When benchmarking on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shot counts, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few-Shot Object Detection. Code and models will be available.
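A rough sketch of the first (feature-level) enhancement: support masks pool support features into per-class centers, and query features are re-weighted by their similarity to those centers. The feature sizes, cosine similarity, and the 1 + weight scaling are illustrative choices, not the paper's exact module.

```python
# Hedged sketch: mask-pooled class centers re-weight query feature maps.
import torch
import torch.nn.functional as F

def masked_class_centers(support_feats, support_masks):
    """support_feats: (N, C, H, W); support_masks: (N, 1, H, W) binary masks."""
    masked = support_feats * support_masks
    return masked.sum(dim=(2, 3)) / support_masks.sum(dim=(2, 3)).clamp(min=1e-6)  # (N, C)

def reweight_query(query_feats, centers):
    """query_feats: (B, C, H, W); centers: (N, C). Weight each location by its max cosine sim."""
    q = F.normalize(query_feats, dim=1)
    c = F.normalize(centers, dim=1)
    sim = torch.einsum("bchw,nc->bnhw", q, c)            # similarity to every class center
    weight = sim.max(dim=1, keepdim=True).values          # (B, 1, H, W)
    return query_feats * (1.0 + weight)

centers = masked_class_centers(torch.randn(5, 64, 32, 32), torch.rand(5, 1, 32, 32).round())
enhanced = reweight_query(torch.randn(2, 64, 32, 32), centers)
```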
This paper focuses on designing efficient models with low parameter counts and FLOPs for dense predictions. Even though CNN-based lightweight methods have achieved stunning results after years of research, the trade-off between model accuracy and constrained resources still needs further improvement. This work rethinks the essential unity of the efficient Inverted Residual Block in MobileNetv2 and the effective Transformer in ViT, inductively abstracting a general concept of the Meta-Mobile Block, and we argue that the specific instantiation is very important to model performance even though the same framework is shared. Motivated by this phenomenon, we deduce a simple yet efficient modern Inverted Residual Mobile Block (iRMB) for mobile applications, which absorbs CNN-like efficiency to model short-distance dependencies and Transformer-like dynamic modeling capability to learn long-distance interactions. Furthermore, we design a ResNet-like 4-phase Efficient MOdel (EMO) based only on a series of iRMBs for dense applications. Massive experiments on the ImageNet-1K, COCO2017, and ADE20K benchmarks demonstrate the superiority of our EMO over state-of-the-art methods; e.g., our EMO-1M/2M/5M achieve 71.5, 75.1, and 78.4 Top-1 accuracy, surpassing state-of-the-art CNN- and Transformer-based models while trading off model accuracy and efficiency well.
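A hedged sketch of an inverted-residual mobile block in the spirit of the iRMB: expand with a 1x1 conv, mix long-range context with self-attention over spatial tokens, mix short-range context with a depthwise 3x3 conv, then project back with a residual connection. The expansion ratio, normalization, and attention type (global rather than windowed) are simplifying assumptions, and the 4-phase EMO layout is not reproduced.

```python
# Hedged sketch: inverted residual block combining attention (long-range) and
# a depthwise conv (short-range), then projecting back with a skip connection.
import torch
import torch.nn as nn

class IRMBSketch(nn.Module):
    def __init__(self, dim=64, expand=4, heads=4):
        super().__init__()
        hidden = dim * expand
        self.norm = nn.LayerNorm(dim)
        self.expand = nn.Conv2d(dim, hidden, 1)
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.dw = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)  # depthwise conv
        self.proj = nn.Conv2d(hidden, dim, 1)
        self.act = nn.GELU()

    def forward(self, x):                        # x: (B, C, H, W)
        b, c, h, w = x.shape
        y = self.norm(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
        y = self.act(self.expand(y))
        tokens = y.flatten(2).transpose(1, 2)    # (B, H*W, hidden) for attention
        tokens, _ = self.attn(tokens, tokens, tokens)
        y = y + tokens.transpose(1, 2).reshape(b, -1, h, w)
        y = self.act(self.dw(y))                 # local mixing with the depthwise conv
        return x + self.proj(y)                  # inverted residual: project back and skip

block = IRMBSketch()
out = block(torch.randn(1, 64, 14, 14))
```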